
Educational Analytics: group items tagged "case study"


George Bradford

Open Research Online - Learning analytics to identify exploratory dialogue within synch...

  •  
    While generic web analytics tend to focus on easily harvested quantitative data, Learning Analytics will often seek qualitative understanding of the context and meaning of this information. This is critical in the case of dialogue, which may be employed to share knowledge and jointly construct understandings, but which also involves many superficial exchanges. Previous studies have validated a particular pattern of "exploratory dialogue" in learning environments to signify sharing, challenge, evaluation and careful consideration by participants. This study investigates the use of sociocultural discourse analysis to analyse synchronous text chat during an online conference. Key words and phrases indicative of exploratory dialogue were identified in these exchanges, and peaks of exploratory dialogue were associated with periods set aside for discussion and keynote speakers. Fewer individuals posted at these times, but meaningful discussion outweighed trivial exchanges. If further analysis confirms the validity of these markers as learning analytics, they could be used by recommendation engines to support learners and teachers in locating dialogue exchanges where deeper learning appears to be taking place.
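
    The marker-based approach the abstract describes can be sketched as a simple scan of time-stamped chat messages for phrases indicative of exploratory dialogue, with hits counted per time bin to locate discussion peaks. This is only an illustration: the marker list and bin size below are assumptions, not the study's validated indicators.

    ```python
    # Count occurrences of exploratory-dialogue marker phrases per time bin
    # to locate peaks of meaningful discussion in a chat transcript.
    # MARKERS is an illustrative list, not the study's validated set.
    from collections import Counter

    MARKERS = ["what do you think", "i agree because", "good point",
               "have you considered", "my view is"]

    def exploratory_hits(message: str) -> int:
        """Number of exploratory-dialogue markers found in one message."""
        text = message.lower()
        return sum(text.count(m) for m in MARKERS)

    def peaks_per_bin(chat, bin_minutes=10):
        """chat: iterable of (minute_offset, message) pairs.
        Returns a Counter mapping time-bin index to total marker hits."""
        bins = Counter()
        for minute, msg in chat:
            bins[minute // bin_minutes] += exploratory_hits(msg)
        return bins

    chat = [(2, "What do you think about this model?"),
            (4, "lol"),
            (13, "I agree because the data supports it")]
    print(peaks_per_bin(chat))  # marker hits per 10-minute bin
    ```

    Bins with unusually high counts would then be candidate periods of deeper learning, matching the abstract's observation that peaks aligned with scheduled discussion slots.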
George Bradford

UTS Case Study - Ascilite2015 Learning Analytics Workshop

  •  
    Simon Shum - Introducing Analytics at UTS - Academic Writing Analytics (slideshare)
George Bradford

Open Research Online - Contested Collective Intelligence: rationale, technologies, and ...

  •  
    We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivate a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that automatic detection of contrasting ideas in texts is possible through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
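
    The finding that explicit rhetorical moves are machine-detectable can be illustrated with a toy marker-based pass: flag sentences containing contrast markers as candidate "contested" statements. A real system such as XIP applies a far richer discourse grammar; the marker list and sentence splitter here are simplifying assumptions.

    ```python
    # Flag sentences containing contrast markers as candidate contested
    # claims. CONTRAST_MARKERS is an illustrative list; production
    # discourse parsers (e.g. XIP) use full rhetorical grammars.
    import re

    CONTRAST_MARKERS = re.compile(
        r"\b(however|in contrast|contrary to|on the other hand|whereas)\b",
        re.IGNORECASE)

    def contested_sentences(text: str):
        """Return sentences that carry an explicit contrast marker."""
        # Naive splitter: break after sentence-ending punctuation.
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return [s for s in sentences if CONTRAST_MARKERS.search(s)]

    doc = ("Prior work assumed a single model. "
           "However, our data suggest two regimes. "
           "Results were stable.")
    print(contested_sentences(doc))
    ```

    Sentences flagged this way could then be offered to human annotators for knowledge mapping, in the spirit of the combined human and machine annotation the abstract proposes.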
George Bradford

Program Evaluation Standards « Joint Committee on Standards for Educational E...

  •  
    "Welcome to the Program Evaluation Standards, 3rd Edition. After seven years of systematic effort and much study, the 3rd edition of the Program Evaluation Standards was published this fall by Sage Publishers: http://www.sagepub.com/booksProdDesc.nav?prodId=Book230597&_requestid=255617. The development process relied on formal and informal needs assessments, reviews of existing scholarship, and the involvement of more than 400 stakeholders in national and international reviews, field trials, and national hearings. It's the first revision of the standards in 17 years. This third edition is similar to the previous two editions (1981, 1994) in many respects; for example, the book is organized into the same four dimensions of evaluation quality (utility, feasibility, propriety, and accuracy). It also still includes the popular and useful "Functional Table of Standards," a glossary, extensive documentation, information about how to apply the standards, and numerous case applications."